Regression and model validation
The data “learning2014” contains information about students’ achievements and learning approaches. The study was conducted in an introductory statistics course in Finland in 2014, mainly among students of the social sciences, on two different courses (part 1 and part 2). This data, learning2014, is from the second part; it contains altogether 180 respondents, of whom over 2/3 are women, and the mean age is 25. The learning approaches are divided into three categories: (1) deep: seeking meaning, relating ideas, use of evidence; (2) surface: lack of purpose, unrelated memorizing, syllabus-boundness; (3) strategic: organized studying, time management (Vehkalahti, 2015). Below you can find the structure and dimensions of the dataset. The data contains 166 observations and seven variables: age, attitude towards statistics, gender, the three learning approaches (deep = deep, surface = surf, strategic = stra) and the students’ exam points.
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ Attitude: int 37 31 25 35 37 38 35 29 38 21 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
## [1] 166 7
Next I will present a graphical overview of the dataset “learning2014”. The graph below gives hints of the correlations between the variables. According to the graph, the learning approaches, gender and age correlate rather little with the exam points. In contrast, the strongest correlation is between attitude and points.
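A minimal sketch of how such an overview could be drawn with GGally (colouring the panels by gender is my own choice, not necessarily how the original plot was made):
library(GGally)
library(ggplot2)
# pairwise scatter plots, distributions and correlations of all variables
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3),
             lower = list(combo = wrap("facethist", bins = 20)))
p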

Based on the graphical overview of the data, it seems that attitude correlates strongly with points. Let’s take a closer look at all the variables:
age: the youngest respondent is 17 years old and the oldest 55, the mean age being 25.5 years.
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 17.00 21.00 22.00 25.51 27.00 55.00
attitude towards statistics: the attitude is measured on a Likert scale of 1-5. Among the students the lowest attitude value was 1.4 and the highest 5.
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.400 2.600 3.200 3.143 3.700 5.000
The learning approaches (deep, strategic and surface) are each measured on a Likert scale of 1-5:
(1) deep learning approach: the lowest value given is 1.6 and the highest 4.9. The mean is 3.7. Altogether 157/166 respondents.
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.583 3.333 3.667 3.680 4.083 4.917
(2) strategic learning approach: the lowest value given is 1.2 and the highest 5.0. The mean is 3.1. Altogether 161/166 respondents.
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.250 2.625 3.188 3.121 3.625 5.000
(3) surface learning approach: the lowest value given is 1.6 and the highest 4.3. The mean is 2.8. Altogether 157/166 respondents.
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.583 2.417 2.833 2.787 3.167 4.333
Of all the learning approaches, the deep approach has the highest median value (3.7) and the surface approach the lowest (2.8).
Let’s take a look at the relationships graphically: attitude indeed seems to correlate with the points, unlike the deep learning approach. Surprisingly, age and the surface learning approach seem to have a small negative correlation among the male (and slightly among the female) students: the older the student is, the less likely he or she is to use the surface approach. A similar correlation did not appear with the other two learning approaches (deep and strategic).


Regression analysis: attitude, deep learning approach and age as explanatory variables, exam points as the dependent variable
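A minimal sketch of how this initial three-predictor model could be fitted (the object name model1 is my own assumption; the variable names follow the dataset):
# multiple regression: exam points explained by attitude, deep approach and age
model1 <- lm(Points ~ Attitude + deep + Age, data = learning2014)
summary(model1)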
No other variable (i.e. none of the learning approaches, age or gender) has statistical significance with the students’ exam points other than attitude. Thus, since there is no statistical significance for age and the deep learning approach, I will remove those two variables and create model2, with only attitude as the independent variable and exam points as the dependent variable. But should we simply ignore the variables that do not reach statistical significance? Perhaps not. In the article “Statistical errors. P values, ‘the gold standard’ of statistical validity, are not as reliable as many scientists assume”, Regina Nuzzo summarizes how a P value of 0.01 is nowadays taken by many scientists to mean that there is only a 1% chance that the result is a false positive. However, the P value does not tell us that. All the P value does is summarize the data under a specific null hypothesis (i.e. that there is no correlation between these variables; but our world does not work like that, usually variables do correlate). (Nuzzo, 2014.)
Still, the P value does tell us something about an existing correlation. Therefore, let’s take a closer look at the regression model “model2”, where attitude towards statistics is the independent variable (since it was the only one with statistical significance) and exam points the dependent variable:
model2 <- lm(Points ~ Attitude, data = learning2014)
summary(model2)
##
## Call:
## lm(formula = Points ~ Attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.63715 1.83035 6.358 1.95e-09 ***
## Attitude 0.35255 0.05674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
According to model2, it seems that when a student’s attitude towards statistics rises, her/his exam points rise as well. The result is statistically significant (nevertheless, it is good to keep in mind Nuzzo’s argument about P values). The multiple R-squared of model2 is 0.19 (19%). R-squared measures how close the data are to the fitted regression line: it is the percentage of the variation of the response variable, i.e. the dependent variable (in model2 the exam points), that is explained by the linear model (Frost, 2013). In conclusion, the attitude variable explains some of the variation in exam points, though not extensively.
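As a quick check of the reported figure, here is a small sketch of computing R-squared by hand from the residuals (assuming the fitted object is model2):
# share of the variance of Points explained by the fitted line
ss_res <- sum(residuals(model2)^2)
ss_tot <- sum((learning2014$Points - mean(learning2014$Points))^2)
1 - ss_res / ss_tot  # should match the Multiple R-squared, about 0.19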
Next I will provide the following diagnostic plots: (1) Residuals vs Fitted values, (2) Normal Q-Q plot and (3) Residuals vs Leverage. The Residuals vs Fitted values plot tells us how well the model (in this case model2) fits the data, and it could fit better. The residuals are spread fairly evenly around the horizontal line in the Fitted values plot, which is an indication that there is no non-linear relationship between exam points and attitude. The Normal Q-Q plot, in turn, shows whether the residuals are normally distributed. In the Theoretical Quantiles plot (below) the residuals line up well on the straight dashed line, which means that they are quite normally distributed. The last one, the Residuals vs Leverage plot, shows possible influential cases. The Leverage plot shows that there are no influential cases: all the cases lie inside the Cook’s distance lines (if there were an influential case, it would show up in the upper or lower right corner).
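A minimal sketch of how these three plots can be drawn from the fitted model (the numbers 1, 2 and 5 select the panels named above):
# 1 = Residuals vs Fitted, 2 = Normal Q-Q, 5 = Residuals vs Leverage
par(mfrow = c(1, 3))
plot(model2, which = c(1, 2, 5))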


References:
Nuzzo, R. (2014). Statistical errors. P values, ‘the gold standard’ of statistical validity, are not as reliable as many scientists assume. Nature, 506, 150-152.
Vehkalahti, K. (2015). The relationship between learning approaches and students’ achievements in an introductory statistics course in Finland. 60th World Statistics Congress, ISI2015. Rio de Janeiro.
Logistic regression
Next I will analyze the relationship between low/high alcohol consumption and age, sex, whether the student aims for higher education (higher) and the final grade (G3). My hypothesis is that the younger the student is and the higher the final grade, the less alcohol the student consumes. Moreover, my hypothesis is that female students use less alcohol, and that a student who aims for higher education in the future uses less alcohol.
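A small sketch of how the summaries below could be produced (assuming the joined data frame is called alc, as in the model output further down):
# distributions of the background variables used in the hypotheses
table(alc$higher)
table(alc$sex)
summary(alc$age)
summary(alc$G3)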
Aims for higher education summary:
## no yes
## 18 364
Gender and age summaries:
## F M
## 198 184
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 15.00 16.00 17.00 16.59 17.00 22.00
Final grade summary:
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00 10.00 12.00 11.46 14.00 18.00
Below I have used logistic regression to study the same variables as above (i.e. the relationship between alcohol consumption and age, gender, final grade and the student’s aim for higher education). According to the logistic regression, the results are rather similar to those of an ordinary regression analysis. Moreover, the confidence intervals of the odds ratios show that there indeed seems to be a strong association between the student’s gender and high alcohol use (which also agrees with the hypothesis).
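A minimal sketch of the model fit and the odds-ratio computation behind the output below (the object name m is my own assumption):
library(dplyr)
# logistic regression: high alcohol use explained by aspiration for higher
# education, age, sex and the final grade
m <- glm(high_use ~ higher + age + sex + G3, data = alc, family = "binomial")
summary(m)
# odds ratios and their 95% confidence intervals
OR <- coef(m) %>% exp
CI <- confint(m) %>% exp
cbind(OR, CI)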
##
## Call: glm(formula = high_use ~ higher + age + sex + G3, family = "binomial",
## data = alc)
##
## Coefficients:
## (Intercept) higheryes age sexM G3
## -3.40256 -0.06768 0.17849 0.88678 -0.07279
##
## Degrees of Freedom: 381 Total (i.e. Null); 377 Residual
## Null Deviance: 465.7
## Residual Deviance: 441.2 AIC: 451.2
##
## Call:
## glm(formula = high_use ~ higher + age + sex + G3, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5339 -0.8527 -0.6714 1.2195 2.0443
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.40256 1.92863 -1.764 0.077692 .
## higheryes -0.06768 0.53866 -0.126 0.900008
## age 0.17849 0.10130 1.762 0.078087 .
## sexM 0.88678 0.23699 3.742 0.000183 ***
## G3 -0.07279 0.03687 -1.974 0.048379 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 441.20 on 377 degrees of freedom
## AIC: 451.2
##
## Number of Fisher Scoring iterations: 4
## (Intercept) higheryes age sexM G3
## -3.40256416 -0.06768339 0.17848874 0.88678017 -0.07279062
## Waiting for profiling to be done...
## OR 2.5 % 97.5 %
## (Intercept) 0.03328781 0.0007336692 1.4376012
## higheryes 0.93455632 0.3231410944 2.7357882
## age 1.19540942 0.9811211394 1.4609460
## sexM 2.42730155 1.5322075526 3.8860179
## G3 0.92979549 0.8641844894 0.9989948
Next I will use only the variables that, according to my logistic regression model, had a statistical relationship with high alcohol consumption; those were age, final grade and gender. Using these variables I provide a cross tabulation of predictions versus actual values - as you can see from the table below, the predictive value of the model is rather good.
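A sketch of the refit and the cross tabulation (the object name m2 and the 0.5 probability cut-off are my assumptions):
library(dplyr)
# refit the model using only age, final grade and sex
m2 <- glm(high_use ~ age + G3 + sex, data = alc, family = "binomial")
summary(m2)
# predicted probabilities and a 2x2 table of predictions versus actual values
probabilities <- predict(m2, type = "response")
alc <- mutate(alc, probability = probabilities, prediction = probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)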
##
## Call:
## glm(formula = high_use ~ age + G3 + sex, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.5458 -0.8533 -0.6718 1.2150 2.0483
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -3.49251 1.79024 -1.951 0.051073 .
## age 0.18078 0.09962 1.815 0.069564 .
## G3 -0.07412 0.03533 -2.098 0.035907 *
## sexM 0.89183 0.23359 3.818 0.000135 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 441.21 on 378 degrees of freedom
## AIC: 449.21
##
## Number of Fisher Scoring iterations: 4
## prediction
## high_use FALSE TRUE
## FALSE 260 8
## TRUE 102 12

Clustering and classification
The data I will be using in the following assignment is called Boston, and it concerns housing values in the suburbs of Boston. The data contains 14 variables:
crim = per capita crime rate by town
zn = proportion of residential land zoned for lots over 25,000 sq. ft.
indus = proportion of non-retail business acres per town
chas = Charles River dummy variable (1 if the tract bounds the river, 0 otherwise)
nox = nitrogen oxides concentration
rm = average number of rooms per dwelling
age = proportion of owner-occupied units built prior to 1940
dis = weighted mean of distances to five Boston employment centres
rad = index of accessibility to radial highways
tax = full-value property-tax rate per $10,000
ptratio = pupil-teacher ratio by town
black = 1000(Bk - 0.63)^2, where Bk is the proportion of blacks by town
lstat = lower status of the population (percent)
medv = median value of owner-occupied homes in $1000s
Below you can see the structure and dimensions of the data.
if (!require("tidyverse")) {
install.packages("tidyverse", repos="http://cran.rstudio.com/")
library("tidyverse")
}
## Loading required package: tidyverse
## Loading tidyverse: tibble
## Loading tidyverse: tidyr
## Loading tidyverse: readr
## Loading tidyverse: purrr
## Conflicts with tidy packages ----------------------------------------------
## filter(): dplyr, stats
## lag(): dplyr, stats
if (!require("corrplot")) {
install.packages("corrplot", repos="http://cran.rstudio.com/")
library("corrplot")
}
## Loading required package: corrplot
if (!require("ggplot2")) {
install.packages("ggplot2", repos="http://cran.rstudio.com/")
library("ggplot2")
}
if (!require("MASS")) {
install.packages("MASS", repos="http://cran.rstudio.com/")
library("MASS")
}
## Loading required package: MASS
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
if (!require("dplyr")) {
install.packages("dplyr", repos="http://cran.rstudio.com/")
library("dlpyr")
}
if (!require("GGally")) {
install.packages("GGally", repos="http://cran.rstudio.com/")
library("GGally")
}
library(psych)
##
## Attaching package: 'psych'
## The following objects are masked from 'package:ggplot2':
##
## %+%, alpha
library(tidyr)
describe(Boston)
## vars n mean sd median trimmed mad min max range
## crim 1 506 3.61 8.60 0.26 1.68 0.33 0.01 88.98 88.97
## zn 2 506 11.36 23.32 0.00 5.08 0.00 0.00 100.00 100.00
## indus 3 506 11.14 6.86 9.69 10.93 9.37 0.46 27.74 27.28
## chas 4 506 0.07 0.25 0.00 0.00 0.00 0.00 1.00 1.00
## nox 5 506 0.55 0.12 0.54 0.55 0.13 0.38 0.87 0.49
## rm 6 506 6.28 0.70 6.21 6.25 0.51 3.56 8.78 5.22
## age 7 506 68.57 28.15 77.50 71.20 28.98 2.90 100.00 97.10
## dis 8 506 3.80 2.11 3.21 3.54 1.91 1.13 12.13 11.00
## rad 9 506 9.55 8.71 5.00 8.73 2.97 1.00 24.00 23.00
## tax 10 506 408.24 168.54 330.00 400.04 108.23 187.00 711.00 524.00
## ptratio 11 506 18.46 2.16 19.05 18.66 1.70 12.60 22.00 9.40
## black 12 506 356.67 91.29 391.44 383.17 8.09 0.32 396.90 396.58
## lstat 13 506 12.65 7.14 11.36 11.90 7.11 1.73 37.97 36.24
## medv 14 506 22.53 9.20 21.20 21.56 5.93 5.00 50.00 45.00
## skew kurtosis se
## crim 5.19 36.60 0.38
## zn 2.21 3.95 1.04
## indus 0.29 -1.24 0.30
## chas 3.39 9.48 0.01
## nox 0.72 -0.09 0.01
## rm 0.40 1.84 0.03
## age -0.60 -0.98 1.25
## dis 1.01 0.46 0.09
## rad 1.00 -0.88 0.39
## tax 0.67 -1.15 7.49
## ptratio -0.80 -0.30 0.10
## black -2.87 7.10 4.06
## lstat 0.90 0.46 0.32
## medv 1.10 1.45 0.41
dim(Boston)
## [1] 506 14
Below I scaled the Boston dataset for standardization purposes. As you can see from the summary of the scaled data, each variable now has a mean of zero and unit variance, so the variables are on a comparable scale. I then created a categorical variable of the crime rate from the scaled crime rate, using its quantiles as the cut points.
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix"
boston_scaled <- as.data.frame(boston_scaled)
scaled_crim <- boston_scaled$crim
summary(scaled_crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419400 -0.410600 -0.390300 0.000000 0.007389 9.924000
bins <- quantile(scaled_crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
crime <- cut(scaled_crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
I split the data into train and test sets, which allows me to check how well the model actually works. The train set is used for training the model and the test set for predicting on new data. Below you can see the LDA fit and plot, with the crime rate category as the target variable and all the other variables of the scaled Boston dataset as predictors.
n <- nrow(boston_scaled)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2648515 0.2500000 0.2376238 0.2475248
##
## Group means:
## zn indus chas nox rm
## low 0.96877921 -0.8958768 -0.12514775 -0.8648075 0.42097773
## med_low -0.06483671 -0.2747523 -0.03844192 -0.6024876 -0.13082732
## med_high -0.37826080 0.1837844 0.17879700 0.3395530 0.09143047
## high -0.48724019 1.0171519 -0.07547406 1.0683243 -0.35731321
## age dis rad tax ptratio
## low -0.8789102 0.8380911 -0.6920711 -0.7195717 -0.4016509
## med_low -0.3516190 0.3964458 -0.5588715 -0.4703816 -0.1020257
## med_high 0.4586884 -0.3632998 -0.4148156 -0.3069160 -0.2094511
## high 0.7998227 -0.8518968 1.6377820 1.5138081 0.7803736
## black lstat medv
## low 0.37657978 -0.77284744 0.497504959
## med_low 0.31213329 -0.13583330 -0.003674681
## med_high 0.04777822 0.05298863 0.131439237
## high -0.84729116 0.88313716 -0.745213521
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.09763754 0.73045940 -0.92846101
## indus -0.04049878 -0.26484850 0.42488791
## chas -0.08382399 -0.09937894 0.05486080
## nox 0.48511130 -0.65405113 -1.44020168
## rm -0.12282117 -0.04742360 -0.14820322
## age 0.22742768 -0.39677307 -0.11009070
## dis -0.07071697 -0.41132743 0.30931926
## rad 3.21189300 0.93885128 0.10980709
## tax -0.02158852 -0.03719265 0.50029310
## ptratio 0.13212314 0.01032222 -0.38809852
## black -0.11744226 0.02011224 0.09341764
## lstat 0.24778786 -0.32895198 0.30738019
## medv 0.19940262 -0.47132391 -0.21436872
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9513 0.0354 0.0134
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)

Below you can see a cross tabulation of the LDA predictions against the correct classes in the test data. The model predicts the crime rate categories of the new data reasonably well: the high category is predicted almost perfectly, while most of the misclassifications occur between the adjacent lower categories.
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 10 9 1 0
## med_low 2 15 8 0
## med_high 1 6 22 1
## high 0 0 0 27
I then reloaded and standardized the original Boston dataset, calculated the (Euclidean) distances between the observations (below) and ran the k-means algorithm on the dataset. For me, the optimal number of clusters is seven: seven sensible clusters, or groups, emerge for this dataset.
data('Boston')
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668
## Median :-0.390280 Median :-0.48724 Median :-0.2109
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202
## chas nox rm age
## Min. :-0.2723 Min. :-1.4644 Min. :-3.8764 Min. :-2.3331
## 1st Qu.:-0.2723 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366
## Median :-0.2723 Median :-0.1441 Median :-0.1084 Median : 0.3171
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.:-0.2723 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059
## Max. : 3.6648 Max. : 2.7296 Max. : 3.5515 Max. : 1.1164
## dis rad tax ptratio
## Min. :-1.2658 Min. :-0.9819 Min. :-1.3127 Min. :-2.7047
## 1st Qu.:-0.8049 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876
## Median :-0.2790 Median :-0.5225 Median :-0.4642 Median : 0.2746
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6617 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058
## Max. : 3.9566 Max. : 1.6596 Max. : 1.7964 Max. : 1.6372
## black lstat medv
## Min. :-3.9033 Min. :-1.5296 Min. :-1.9063
## 1st Qu.: 0.2049 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median : 0.3808 Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.4332 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 0.4406 Max. : 3.5453 Max. : 2.9865
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4620 4.8240 4.9110 6.1860 14.4000
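A common way to justify the choice of k is to look at how the total within-cluster sum of squares (WCSS) drops as the number of clusters grows; below is a sketch of that check, assuming the clustering input is the scaled data frame boston_scaled:
set.seed(123)  # k-means starts from random centers, so fix the seed
k_max <- 10
# total within-cluster sum of squares for k = 1..10
twcss <- sapply(1:k_max, function(k) kmeans(boston_scaled, centers = k)$tot.withinss)
plot(1:k_max, twcss, type = "b", xlab = "number of clusters k", ylab = "total WCSS")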
km <-kmeans(dist_eu, centers = 7)
pairs(boston_scaled, col = km$cluster)

Dimensionality reduction techniques
Altogether the human data has 155 observations and 9 variables, which are the following:
Country = country where the data has been collected
lifeExp = Life expectancy at birth
GNI = Gross National Income per capita
educationExp = Expected years of schooling
mortality = Maternal mortality ratio
repr.parliament = Percentage of female representatives in parliament
eduRatio = the ratio of Female and Male populations with secondary education in each country
labourRatio = the ratio of labour force participation of females and males in each country
birthRate = Adolescent birth rate
human <- read.csv(file = "/Applications/IODS-project/data/human1.csv", sep =",", header =TRUE)
human$X <- NULL
dim(human)
## [1] 155 9
describe(human)
## vars n mean sd median trimmed mad min max
## Country* 1 155 78.00 44.89 78.00 78.00 57.82 1.00 155.00
## lifeExp 2 155 71.65 8.33 74.20 72.40 7.56 49.00 83.50
## GNI* 3 155 78.00 44.89 78.00 78.00 57.82 1.00 155.00
## educationExp 4 155 13.18 2.84 13.50 13.24 2.97 5.40 20.20
## birthRate 5 155 47.16 41.11 33.60 41.62 35.73 0.60 204.80
## mortality 6 155 149.08 211.79 49.00 104.70 63.75 1.00 1100.00
## repr.parliament 7 155 20.91 11.49 19.30 20.32 11.42 0.00 57.50
## eduRatio 8 155 0.85 0.24 0.94 0.87 0.12 0.17 1.50
## labourRatio 9 155 0.71 0.20 0.75 0.73 0.17 0.19 1.04
## range skew kurtosis se
## Country* 154.00 0.00 -1.22 3.61
## lifeExp 34.50 -0.76 -0.15 0.67
## GNI* 154.00 0.00 -1.22 3.61
## educationExp 14.80 -0.20 -0.34 0.23
## birthRate 204.20 1.13 0.89 3.30
## mortality 1099.00 2.03 4.16 17.01
## repr.parliament 57.50 0.55 -0.10 0.92
## eduRatio 1.33 -0.76 0.55 0.02
## labourRatio 0.85 -0.87 0.05 0.02
Below you can see the summaries of the variables in the “human” data, along with a graphical overview of the data. According to the pairs plot, there are some correlations between the variables, e.g. a positive correlation between life expectancy and expected years of education, and a negative correlation between expected years of education and the adolescent birth rate.
## Country lifeExp GNI educationExp
## Afghanistan: 1 Min. :49.00 1,123 : 1 Min. : 5.40
## Albania : 1 1st Qu.:66.30 1,228 : 1 1st Qu.:11.25
## Algeria : 1 Median :74.20 1,428 : 1 Median :13.50
## Argentina : 1 Mean :71.65 1,458 : 1 Mean :13.18
## Armenia : 1 3rd Qu.:77.25 1,507 : 1 3rd Qu.:15.20
## Australia : 1 Max. :83.50 1,583 : 1 Max. :20.20
## (Other) :149 (Other):149
## birthRate mortality repr.parliament eduRatio
## Min. : 0.60 Min. : 1.0 Min. : 0.00 Min. :0.1717
## 1st Qu.: 12.65 1st Qu.: 11.5 1st Qu.:12.40 1st Qu.:0.7264
## Median : 33.60 Median : 49.0 Median :19.30 Median :0.9375
## Mean : 47.16 Mean : 149.1 Mean :20.91 Mean :0.8529
## 3rd Qu.: 71.95 3rd Qu.: 190.0 3rd Qu.:27.95 3rd Qu.:0.9968
## Max. :204.80 Max. :1100.0 Max. :57.50 Max. :1.4967
##
## labourRatio
## Min. :0.1857
## 1st Qu.:0.5984
## Median :0.7535
## Mean :0.7074
## 3rd Qu.:0.8535
## Max. :1.0380
##


In order to explore the variables and their relationships more closely, I removed the GNI and Country variables. The graphical overview below supports my earlier observation: a positive correlation between life expectancy and expected years of education, and likewise a clear association between the adolescent birth rate and maternal mortality.
require(MASS)
require(dplyr)
keep <- c("lifeExp", "educationExp", "birthRate", "mortality","repr.parliament", "eduRatio", "labourRatio")
human <- dplyr::select(human, one_of(keep))
p <- ggpairs(human, mapping = aes(alpha=0.3), lower = list(combo = wrap("facethist", bins = 20)))
p

cor_matrix <- cor(human)
corrplot(cor_matrix, method="circle", type="upper", cl.pos="b", tl.pos="d", tl.cex=0.6)

cor(human) %>%
corrplot()

According to the first two principal components (PC1 and PC2), the ratio of female to male labour force participation and the percentage of female representatives in parliament are strongly related and dominate one of these dimensions, while the ratio of females to males with secondary education, expected years of schooling and life expectancy at birth dominate the other. The first component captures clearly more of the variation (54.7%) than the second (18.5%).
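A minimal sketch of the PCA behind the percentages below, assuming the analysis is run on the standardized data:
# standardize the variables and perform principal component analysis
human_std <- scale(human)
pca_human <- prcomp(human_std)
# percentage of variance captured by each principal component
round(100 * summary(pca_human)$importance[2, ], digits = 1)
# biplot of the observations and variable arrows on the first two components
biplot(pca_human, choices = 1:2, cex = c(0.6, 0.8), col = c("grey40", "deeppink2"))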
## PC1 PC2 PC3 PC4 PC5 PC6 PC7
## 54.7 18.5 10.8 7.0 4.2 3.1 1.7

Lastly, I will take a look at the “tea” dataset from the FactoMineR package. Below you can see the structure and dimensions of the dataset: altogether it has 300 observations and 36 variables. Since 36 variables were a bit too many, I decided to use only a few of them for the multiple correspondence analysis (MCA). The MCA plot indicates that people who purchase their tea from tea shops also drink loose-leaf (unpackaged) tea. In contrast, people who drink Earl Grey tend to take their tea with milk and sugar and outside lunch hours. Naturally, the tea drinkers who purchase their tea from both an ordinary store and a tea shop also use both tea bags and loose-leaf tea.
## [1] 300 36
## vars n mean sd median trimmed mad min max range
## breakfast* 1 300 1.52 0.50 2 1.52 0.00 1 2 1
## tea.time* 2 300 1.56 0.50 2 1.58 0.00 1 2 1
## evening* 3 300 1.66 0.48 2 1.70 0.00 1 2 1
## lunch* 4 300 1.85 0.35 2 1.94 0.00 1 2 1
## dinner* 5 300 1.93 0.26 2 2.00 0.00 1 2 1
## always* 6 300 1.66 0.48 2 1.70 0.00 1 2 1
## home* 7 300 1.03 0.17 1 1.00 0.00 1 2 1
## work* 8 300 1.29 0.45 1 1.24 0.00 1 2 1
## tearoom* 9 300 1.19 0.40 1 1.12 0.00 1 2 1
## friends* 10 300 1.35 0.48 1 1.31 0.00 1 2 1
## resto* 11 300 1.26 0.44 1 1.20 0.00 1 2 1
## pub* 12 300 1.21 0.41 1 1.14 0.00 1 2 1
## Tea* 13 300 1.86 0.58 2 1.83 0.00 1 3 2
## How* 14 300 1.62 0.92 1 1.49 0.00 1 4 3
## sugar* 15 300 1.48 0.50 1 1.48 0.00 1 2 1
## how* 16 300 1.55 0.70 1 1.44 0.00 1 3 2
## where* 17 300 1.46 0.67 1 1.32 0.00 1 3 2
## price* 18 300 3.86 2.16 5 3.95 1.48 1 6 5
## age 19 300 37.05 16.87 32 34.94 16.31 15 90 75
## sex* 20 300 1.41 0.49 1 1.38 0.00 1 2 1
## SPC* 21 300 3.63 1.95 3 3.62 2.97 1 7 6
## Sport* 22 300 1.60 0.49 2 1.62 0.00 1 2 1
## age_Q* 23 300 2.61 1.42 2 2.52 1.48 1 5 4
## frequency* 24 300 2.33 1.04 3 2.29 1.48 1 4 3
## escape.exoticism* 25 300 1.53 0.50 2 1.53 0.00 1 2 1
## spirituality* 26 300 1.31 0.46 1 1.27 0.00 1 2 1
## healthy* 27 300 1.30 0.46 1 1.25 0.00 1 2 1
## diuretic* 28 300 1.42 0.49 1 1.40 0.00 1 2 1
## friendliness* 29 300 1.19 0.40 1 1.12 0.00 1 2 1
## iron.absorption* 30 300 1.90 0.30 2 2.00 0.00 1 2 1
## feminine* 31 300 1.57 0.50 2 1.59 0.00 1 2 1
## sophisticated* 32 300 1.72 0.45 2 1.77 0.00 1 2 1
## slimming* 33 300 1.15 0.36 1 1.06 0.00 1 2 1
## exciting* 34 300 1.61 0.49 2 1.64 0.00 1 2 1
## relaxing* 35 300 1.62 0.49 2 1.65 0.00 1 2 1
## effect.on.health* 36 300 1.78 0.41 2 1.85 0.00 1 2 1
## skew kurtosis se
## breakfast* -0.08 -2.00 0.03
## tea.time* -0.25 -1.94 0.03
## evening* -0.66 -1.57 0.03
## lunch* -1.99 1.96 0.02
## dinner* -3.35 9.28 0.01
## always* -0.66 -1.57 0.03
## home* 5.48 28.16 0.01
## work* 0.92 -1.16 0.03
## tearoom* 1.55 0.39 0.02
## friends* 0.64 -1.59 0.03
## resto* 1.07 -0.86 0.03
## pub* 1.42 0.01 0.02
## Tea* 0.02 -0.21 0.03
## How* 1.05 -0.41 0.05
## sugar* 0.07 -2.00 0.03
## how* 0.86 -0.53 0.04
## where* 1.14 0.03 0.04
## price* -0.36 -1.65 0.12
## age 0.88 -0.10 0.97
## sex* 0.38 -1.86 0.03
## SPC* 0.09 -1.39 0.11
## Sport* -0.39 -1.85 0.03
## age_Q* 0.32 -1.30 0.08
## frequency* -0.09 -1.34 0.06
## escape.exoticism* -0.11 -2.00 0.03
## spirituality* 0.80 -1.36 0.03
## healthy* 0.87 -1.25 0.03
## diuretic* 0.32 -1.90 0.03
## friendliness* 1.55 0.39 0.02
## iron.absorption* -2.59 4.74 0.02
## feminine* -0.28 -1.93 0.03
## sophisticated* -0.96 -1.09 0.03
## slimming* 1.95 1.81 0.02
## exciting* -0.46 -1.79 0.03
## relaxing* -0.51 -1.75 0.03
## effect.on.health* -1.35 -0.19 0.02
## Warning: attributes are not identical across measure variables; they will
## be dropped

## 'data.frame': 300 obs. of 6 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## vars n mean sd median trimmed mad min max range skew kurtosis
## Tea* 1 300 1.86 0.58 2 1.83 0 1 3 2 0.02 -0.21
## How* 2 300 1.62 0.92 1 1.49 0 1 4 3 1.05 -0.41
## how* 3 300 1.55 0.70 1 1.44 0 1 3 2 0.86 -0.53
## sugar* 4 300 1.48 0.50 1 1.48 0 1 2 1 0.07 -2.00
## where* 5 300 1.46 0.67 1 1.32 0 1 3 2 1.14 0.03
## lunch* 6 300 1.85 0.35 2 1.94 0 1 2 1 -1.99 1.96
## se
## Tea* 0.03
## How* 0.05
## how* 0.04
## sugar* 0.03
## where* 0.04
## lunch* 0.02
## Warning: attributes are not identical across measure variables; they will
## be dropped
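A sketch of the variable selection and MCA call that produce the summary below (the kept columns match the structure shown above; the plotting call is my own assumption):
library(FactoMineR)
library(dplyr)
data("tea")
# keep only a handful of the 36 columns for the analysis
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
# multiple correspondence analysis on the chosen categorical variables
mca <- MCA(tea_time, graph = FALSE)
summary(mca)
# variable biplot of the first two MCA dimensions
plot(mca, invisible = c("ind"), habillage = "quali")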

##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6
## Variance 0.279 0.261 0.219 0.189 0.177 0.156
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953
## Dim.7 Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.144 0.141 0.117 0.087 0.062
## % of var. 7.841 7.705 6.392 4.724 3.385
## Cumulative % of var. 77.794 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898
## cos2 v.test Dim.3 ctr cos2 v.test
## black 0.003 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 0.027 2.867 | 0.433 9.160 0.338 10.053 |
## green 0.107 -5.669 | -0.108 0.098 0.001 -0.659 |
## alone 0.127 -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 0.035 3.226 | 1.329 14.771 0.218 8.081 |
## milk 0.020 2.422 | 0.013 0.003 0.000 0.116 |
## other 0.102 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag 0.161 -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 0.478 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged 0.141 -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
